
Optimize Dockerfile and CI #19

Merged
M03ED merged 6 commits into PasarGuard:dev from wavy-cat:main
Dec 4, 2025

Conversation

@wavy-cat (Contributor) commented Dec 3, 2025

This PR optimizes the Dockerfile and related workflows, reducing the final image size by ~17% and narrowing the attack surface. Key points:

  • The docker-build and docker-build-dev workflows no longer build the binary. The build process is now fully handled inside the Dockerfile.
  • Instead of QEMU, native cross-platform Go builds are used.
  • Dockerfile.multi has been removed in favor of a single Dockerfile. I don’t fully understand the original reason for this separation, so if it was important, please explain the purpose and I will try to adapt the PR accordingly (or you can adjust it yourselves).
  • Most importantly, the base image has been changed from Alpine to distroless (static-debian12). Distroless reduces image size and lowers the attack surface (as also confirmed by Docker Scout analysis).
  • The Xray install script has been moved into a new scripts directory. Previously it lived in the Marzban repository, which is no longer maintained, so for safety reasons I decided to relocate it here.
  • The Makefile was updated to make use of the script file in install_xray.

Additionally, my initial intention was simply to switch the final image from Alpine to Distroless, but after reviewing the CI setup and seeing the legacy Marzban scripts in use, I ended up doing a bit more work in this PR than originally planned :)
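For readers unfamiliar with the approach, here is a minimal sketch of what a multi-stage Dockerfile with native Go cross-compilation and a distroless runtime looks like. The stage names, paths, and build flags below are illustrative, not the exact contents of this PR:

```dockerfile
# Builder stage: runs on the host's native architecture (no QEMU emulation);
# TARGETOS/TARGETARCH are supplied automatically by Docker Buildx.
FROM --platform=$BUILDPLATFORM golang:1.25.1-alpine AS builder
ARG TARGETOS TARGETARCH
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o main .

# Runtime stage: distroless, no shell or package manager.
FROM gcr.io/distroless/static-debian12
WORKDIR /app
COPY --from=builder /src/main /app/main
ENTRYPOINT ["/app/main"]
```

The `--platform=$BUILDPLATFORM` flag keeps the builder on the host's native architecture, so the Go toolchain cross-compiles for the target platform instead of running under emulation.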

Summary by CodeRabbit

  • Chores
    • Simplified Docker build process by streamlining workflows and removing unnecessary intermediate build steps.
    • Optimized Docker images with distroless runtime for improved security and reduced image size.
    • Migrated binary installation to local build scripts for better build portability and reliability.


@coderabbitai coderabbitai bot commented Dec 3, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


Walkthrough

The changes consolidate the Docker build process by removing multi-architecture binary pre-builds from CI workflows and eliminating Dockerfile.multi, implementing a platform-aware multi-stage Dockerfile with distroless runtime, and replacing remote installer execution with a local Bash script for Xray installation.

Changes

  • CI workflow simplification (.github/workflows/docker-build-dev.yml, .github/workflows/docker-build.yml):
    Removed the BINARY_NAME environment variable, Go setup steps, cross-architecture binary builds, QEMU setup, and multi-architecture staging. Updated Dockerfile references from Dockerfile.multi to Dockerfile. Removed the GHA cache mode configuration in the latter workflow.
  • Docker build consolidation (Dockerfile, Dockerfile.multi):
    Rewrote the Dockerfile as a multi-stage build with a platform-aware builder stage (golang:1.25.1-alpine) and a distroless runtime stage (gcr.io/distroless/static-debian12). The builder compiles with make using GOOS/GOARCH variables; the final image copies artifacts and sets ENTRYPOINT to ./main. The working directory changed from /app to /src; xray binaries and share files are copied into the final image. Deleted Dockerfile.multi, the multi-arch Alpine-based build definition that previously accepted TARGETARCH and copied pre-built binaries.
  • Build script replacement (Makefile):
    Added chmod +x for scripts/install_xray.sh and replaced all remote installer invocations (curl-based) with local script execution via ./scripts/install_xray.sh.
  • Local Xray installer (scripts/install_xray.sh):
    New Bash script that downloads and installs Xray core, with OS/architecture detection, validation, optional release tag support, installation to standard directories (/usr/local/bin/xray, /usr/local/share/xray/), error handling, and cleanup.
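For context, install scripts like this typically map the output of `uname -m` to the arch component of Xray's release asset names (Xray-linux-&lt;arch&gt;.zip). A simplified, hypothetical sketch of that mapping follows; the function name and the subset of architectures handled are illustrative, not the exact code in scripts/install_xray.sh:

```shell
#!/usr/bin/env sh
set -e

# Hypothetical helper: translate a machine string (as reported by `uname -m`)
# into the arch component used by Xray release asset names.
xray_arch() {
    case "$1" in
        x86_64 | amd64)  echo '64' ;;
        i386 | i686)     echo '32' ;;
        aarch64 | arm64) echo 'arm64-v8a' ;;
        armv7l)          echo 'arm32-v7a' ;;
        s390x)           echo 's390x' ;;
        riscv64)         echo 'riscv64' ;;
        *) echo "unsupported architecture: $1" >&2; return 1 ;;
    esac
}

# Print the asset arch name for the current machine.
xray_arch "$(uname -m)"
```

The fallthrough `*)` branch is where the real script's validation and error handling hook in; the ARM variant selection additionally inspects /proc/cpuinfo, as discussed in the review comments below.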

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Areas requiring extra attention:
    • Verify multi-stage Dockerfile build logic and artifact copying from builder to distroless image, particularly xray binary placement and permissions
    • Validate scripts/install_xray.sh architecture detection logic (CPU feature checks for ARM variants) and error paths
    • Confirm removal of multi-architecture pre-builds from CI doesn't break any downstream processes
    • Test that distroless image contains all required runtime dependencies for xray execution

Poem

🐰 Old build scripts bundled tight, now we streamline and simplify,

Multi-stage containers born, distroless runtime flies,

Local scripts replace the curl, binaries built fresh and dry,

Architecture-aware we go, watch the images multiply! 🏗️

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately reflects the main changes: optimization of the Dockerfile and CI workflows as described in the PR objectives.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Makefile (1)

94-109: ✗ Missing unzip dependency in update_os target.

The install_xray target relies on scripts/install_xray.sh, which calls unzip -q on line 93. However, none of the distribution-specific branches in update_os (lines 68–88) install unzip—only curl and bash are installed. This will cause the script to fail with "unzip: not found" when invoked on any supported Linux distribution.

Apply this diff to add unzip to all update_os install commands:

 	# Debian/Ubuntu
 	if [ "$(DISTRO)" = "debian" ] || [ "$(DISTRO)" = "ubuntu" ]; then \
-		sudo apt-get update && \
-		sudo apt-get install -y curl bash; \
+		sudo apt-get update && \
+		sudo apt-get install -y curl bash unzip; \
 	fi

 	# Alpine Linux
 	if [ "$(DISTRO)" = "alpine" ]; then \
 		apk update && \
-		apk add --no-cache curl bash; \
+		apk add --no-cache curl bash unzip; \
 	fi

 	# CentOS/RHEL/Fedora
 	if [ "$(DISTRO)" = "centos" ] || [ "$(DISTRO)" = "rhel" ] || [ "$(DISTRO)" = "fedora" ]; then \
 		sudo yum update -y && \
-		sudo yum install -y curl bash; \
+		sudo yum install -y curl bash unzip; \
 	fi

 	# Arch Linux
 	if [ "$(DISTRO)" = "arch" ]; then \
-		sudo pacman -Sy --noconfirm curl bash; \
+		sudo pacman -Sy --noconfirm curl bash unzip; \
 	fi
🧹 Nitpick comments (2)
scripts/install_xray.sh (1)

32-50: CPU feature detection is fragile; consider defensive guards.

Lines 33 and 37 rely on /proc/cpuinfo to detect the vfp (vector floating point) feature for ARM variants. Line 50 uses lscpu to detect endianness for MIPS64. Both approaches have portability risks:

  • /proc/cpuinfo format and available fields vary by architecture and environment (e.g., QEMU, containers)
  • lscpu may not be available on minimal systems or inside lightweight containers

While the fallback to arm32-v5 (lines 33, 37) is conservative, the script should either verify these tools exist before use or document the assumptions. Consider adding checks or fallback behavior if these commands fail.

Optionally, consider adding defensive checks:

# Line 33 area - add guard for cpuinfo check
if [ -f /proc/cpuinfo ] && grep -q Features /proc/cpuinfo; then
    grep Features /proc/cpuinfo | grep -qw 'vfp' || ARCH='arm32-v5'
fi

# Line 50 area - add guard for lscpu
if command -v lscpu >/dev/null 2>&1; then
    lscpu | grep -q "Little Endian" && ARCH='mips64le'
fi
Dockerfile (1)

22-24: Optimize COPY to reduce final image bloat.

Line 22 copies the entire builder stage's /src directory (including source code, scripts, and build artifacts) into the final distroless image. Distroless images are designed for minimal attack surface and small size. Copying unnecessary build artifacts and source code negates some of these benefits.

Consider copying only the essential runtime artifacts:

-COPY --from=builder /src /app
+COPY --from=builder /src/main /app/main

Then separately copy only the runtime config or data files if needed (e.g., any config.json, data files, etc.). This keeps the final image focused on runtime dependencies only.

Verify which files in /src are actually needed at runtime. If only the binary is needed, the narrower COPY is preferred. If config files or other data are required, copy them explicitly rather than the entire directory.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c4b690b and 0e51b5a.

📒 Files selected for processing (6)
  • .github/workflows/docker-build-dev.yml (1 hunks)
  • .github/workflows/docker-build.yml (1 hunks)
  • Dockerfile (1 hunks)
  • Dockerfile.multi (0 hunks)
  • Makefile (1 hunks)
  • scripts/install_xray.sh (1 hunks)
💤 Files with no reviewable changes (1)
  • Dockerfile.multi
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test
🔇 Additional comments (3)
.github/workflows/docker-build-dev.yml (1)

42-42: ✓ Workflow simplification looks good.

Moving the build process into the multi-stage Dockerfile and eliminating pre-built cross-compiled binaries in the CI workflow is a clean simplification. The use of Docker Buildx with native Go cross-compilation (via GOOS/GOARCH) is the modern, preferred approach.

.github/workflows/docker-build.yml (1)

61-61: ✓ Consistent workflow update for release builds.

Aligns with the dev workflow change to reference a single multi-stage Dockerfile instead of Dockerfile.multi. The release workflow correctly delegates build logic to the Dockerfile.

Dockerfile (1)

14-15: The review comment is based on incorrect assumptions and does not apply to the actual codebase.

The original comment references a non-existent scripts/install_xray.sh file and quotes line 93 from it. In reality:

  1. No local scripts/install_xray.sh exists in the repository.
  2. The Makefile's install_xray target (lines 94–108) downloads and executes an external script directly from GitHub (https://github.com/Gozargah/Marzban-scripts/raw/master/install_latest_xray.sh), not a local script.
  3. The claim about an unzip dependency is unfounded—there is no local script to examine.

The actual Dockerfile (line 23) does execute make install_xray in an Alpine container, which has legitimate concerns (external script execution, curl dependency), but they differ from those claimed in the review.

Likely an incorrect or invalid review comment.

Comment on lines +1 to +10
#!/usr/bin/env bash

# Download Xray latest

RELEASE_TAG="latest"

if [[ "$1" ]]; then
RELEASE_TAG="$1"
fi


⚠️ Potential issue | 🔴 Critical

Add set -e to ensure Docker build fails on any error.

The script lacks set -e at the top, which means if download_xray fails (returns 1), execution continues to extract_xray. Inside Docker (line 15 of Dockerfile: RUN make install_xray), a failure in a sub-function might not propagate cleanly to stop the build. In the worst case, the build could silently proceed with incomplete artifacts.

Add set -e after the shebang:

 #!/usr/bin/env bash
+set -e
 
 # Download Xray latest
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (script header with set -e added):

#!/usr/bin/env bash
set -e

# Download Xray latest

RELEASE_TAG="latest"

if [[ "$1" ]]; then
    RELEASE_TAG="$1"
fi
🤖 Prompt for AI Agents
In scripts/install_xray.sh around lines 1 to 10, the script is missing a safety
flag so failures in functions (like download_xray) don't allow execution to
continue; add "set -e" immediately after the shebang (#!/usr/bin/env bash) so
the script exits on any command failure, ensuring Docker builds fail fast when a
step errors.

@M03ED (Contributor) commented Dec 3, 2025

Thanks for the PR.
I haven't reviewed this completely yet, but after a quick look at the changes there are a few things I need to mention.
First, the PR's target branch should be dev.
Second, we have a dedicated repo for scripts and we manage them there (the dev branch uses that repo, not the old gozargah organization).
Third, generating the binary files in the workflow itself was my decision, because of how slow Docker is when you try to build a binary for an arm image on an amd64 device. You can test this and see how bad the slowdown is.
About using Debian, I'm not sure; I need to check and see the difference.

@wavy-cat wavy-cat changed the base branch from main to dev December 4, 2025 02:50
@wavy-cat (Contributor, Author) commented Dec 4, 2025

> Third is my decision for generating binary files on workflow itself, it's because of how slow docker is when you try to generate a binary for arm image on amd64 device

That’s exactly why native cross-compilation was used — so we don’t waste CPU power on QEMU emulation.
In short, I moved the cross-platform Go build from CI to Dockerfile.

> As for using Debian, I'm not sure and I should check the difference.

Distroless uses Debian only as a source for system libraries. In practice, Distroless isn't a real OS: "Distroless images contain only your application and its runtime dependencies. They do not contain package managers, shells, or any other programs you would expect to find in a standard Linux distribution."

The binary build process still uses Alpine.

@M03ED M03ED merged commit 4248952 into PasarGuard:dev Dec 4, 2025
2 checks passed
@M03ED (Contributor) commented Dec 4, 2025

Let's see how it affects the image build process.

@M03ED (Contributor) commented Dec 4, 2025

Well, it doubled the build time instead of optimizing it.
Before these changes the build time was around 2 minutes, but now it's more than 4 minutes.

@wavy-cat (Contributor, Author) commented Dec 4, 2025

The workflow spent 140 seconds exporting the build cache:

#37 preparing build cache for export 142.2s done
#37 DONE 142.2s

Last time there weren't many changes, so it only spent about 14 seconds:

#28 preparing build cache for export 14.9s done
#28 DONE 14.9s

@M03ED (Contributor) commented Dec 4, 2025

I re-ran the workflow and it now reached 53 seconds:

#37 preparing build cache for export 53.3s done
#37 DONE 53.3s

I think the reason the cache takes this long is the additional layers the new Dockerfile has.
The final image size is about 11 MB lower (42.4 MB -> 31.5 MB), and that part is good.

I'll probably bring back Dockerfile.multi, but with Debian, and see what happens; that will probably be the most efficient way.
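As a side note (an option, not something this PR uses): BuildKit cache mounts are a common way to keep the Go module and compiler caches out of image layers while still reusing them across builds, which can shrink both the exported cache and rebuild times. A sketch of a builder stage with cache mounts, with illustrative paths and assuming go.mod/go.sum exist at the repo root:

```dockerfile
# Sketch only; requires BuildKit/Buildx. Paths and image tags are illustrative.
FROM --platform=$BUILDPLATFORM golang:1.25.1-alpine AS builder
ARG TARGETOS TARGETARCH
WORKDIR /src

# Download modules in their own layer so they are cached independently
# of source changes.
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download

COPY . .
# Reuse both the module cache and the compiler cache across builds;
# neither ends up in an image layer or the exported layer cache.
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/main .
```

Note that `--mount=type=cache` data lives on the builder host, so it helps most on persistent runners; on ephemeral CI runners it needs a cache backend to persist between jobs.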

@wavy-cat (Contributor, Author) commented
@M03ED What is the reason for replacing distroless with alpine in 3dbc49b?

@M03ED (Contributor) commented Dec 12, 2025

I was facing an execution error. Also, we need Alpine for future cases, one of them being WireGuard support.
Besides, the difference is less than 2 MB.
